Data Integration
End-to-end data-driven weather prediction
A new AI weather prediction system, developed by a team of researchers from the University of Cambridge, can deliver accurate forecasts while using less computing power than current AI and physics-based forecasting systems. The system, Aardvark Weather, has been supported by the Alan Turing Institute, Microsoft Research and the European Centre for Medium-Range Weather Forecasts. It provides a blueprint for a new approach to weather forecasting with the potential to improve current practices. The results are reported in the journal Nature. "Aardvark reimagines current weather prediction methods, offering the potential to make weather forecasts faster, cheaper, more flexible and more accurate than ever before, helping to transform weather prediction in both developed and developing countries," said Professor Richard Turner from Cambridge's Department of Engineering, who led the research.
VeXKD: The Versatile Integration of Cross-Modal Fusion and Knowledge Distillation for 3D Perception
Recent advancements in 3D perception have led to a proliferation of network architectures, particularly those involving multi-modal fusion algorithms. While these fusion algorithms improve accuracy, their complexity often impedes real-time performance. This paper introduces VeXKD, an effective and Versatile framework that integrates Cross-Modal Fusion with Knowledge Distillation. VeXKD applies knowledge distillation exclusively to the Bird's Eye View (BEV) feature maps, enabling the transfer of cross-modal insights to single-modal students without additional inference-time overhead. It avoids volatile components that vary across 3D perception tasks and student modalities, thus improving versatility. The framework adopts a modality-general cross-modal fusion module to bridge the modality gap between the multi-modal teachers and single-modal students. Furthermore, leveraging byproducts generated during fusion, our BEV query guided mask generation network identifies crucial spatial locations across different BEV feature maps from different tasks and semantic levels in a data-driven manner, significantly enhancing the effectiveness of knowledge distillation. Extensive experiments on the nuScenes dataset demonstrate notable improvements, with up to 6.9%/4.2%
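To make the distillation mechanism concrete, here is a minimal sketch of a masked BEV-feature distillation loss: a spatial mask weights each BEV location so the single-modal student imitates the multi-modal teacher only where it matters. The function name, tensor shapes, and MSE objective are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn.functional as F

def masked_bev_distillation_loss(student_bev, teacher_bev, mask):
    """Distill teacher BEV features into the student at mask-weighted locations.

    student_bev, teacher_bev: (B, C, H, W) BEV feature maps.
    mask: (B, 1, H, W) soft weights in [0, 1] marking crucial spatial locations.
    """
    per_pixel = F.mse_loss(student_bev, teacher_bev, reduction="none")  # (B, C, H, W)
    per_pixel = per_pixel.mean(dim=1, keepdim=True)                     # (B, 1, H, W)
    # Weight each location's error by the mask and normalize by total mask mass.
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1e-6)

# Hypothetical usage: multi-modal teacher features, single-modal student features.
student_bev = torch.randn(2, 64, 128, 128, requires_grad=True)
teacher_bev = torch.randn(2, 64, 128, 128)
mask = torch.rand(2, 1, 128, 128)
loss = masked_bev_distillation_loss(student_bev, teacher_bev.detach(), mask)
loss.backward()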
MobiFuse: Learning Universal Human Mobility Patterns through Cross-domain Data Fusion
Ma, Haoxuan, Liao, Xishun, Liu, Yifan, Jiang, Qinhua, Stanford, Chris, Cao, Shangqing, Ma, Jiaqi
Human mobility modeling is critical for urban planning and transportation management, yet existing datasets often lack the resolution and semantic richness required for comprehensive analysis. To address this, we propose a cross-domain data fusion framework that integrates multi-modal data of distinct nature and spatio-temporal resolution, including geographical, mobility, socio-demographic, and traffic information, to construct a privacy-preserving and semantically enriched human travel trajectory dataset. This framework is demonstrated through two case studies in Los Angeles (LA) and Egypt, where a domain adaptation algorithm ensures its transferability across diverse urban contexts. Quantitative evaluation shows that the generated synthetic dataset accurately reproduces mobility patterns observed in empirical data. Moreover, large-scale traffic simulations for LA County based on the generated synthetic demand align well with observed traffic. On California's I-405 corridor, the simulation yields a Mean Absolute Percentage Error of 5.85% for traffic volume and 4.36% for speed compared to Caltrans PeMS observations.
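For reference, the Mean Absolute Percentage Error quoted above is straightforward to compute; the sketch below uses made-up numbers purely for illustration, not the paper's data.

import numpy as np

def mape(observed, simulated):
    """Mean Absolute Percentage Error, as used to compare simulated
    traffic against sensor observations (e.g., Caltrans PeMS)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 100.0 * np.mean(np.abs((simulated - observed) / observed))

# Illustrative numbers only: hourly traffic volumes on a corridor.
obs_volume = np.array([4200, 5100, 4800, 5300])
sim_volume = np.array([4000, 5350, 4600, 5500])
print(f"volume MAPE: {mape(obs_volume, sim_volume):.2f}%")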
Kalman Filter, Sensor Fusion, and Constrained Regression: Equivalences and Insights
The Kalman filter (KF) is one of the most widely used tools for data assimilation and sequential estimation. In this work, we show that the state estimates from the KF in a standard linear dynamical system setting are equivalent to those given by the KF in a transformed system, with infinite process noise (i.e., a "flat prior") and an augmented measurement space. This reformulation, which we refer to as augmented measurement sensor fusion (SF), is conceptually interesting because the transformed system here is seemingly static (as there is effectively no process model), yet we can still capture the state dynamics inherent to the KF by folding the process model into the measurement space. Further, this reformulation of the KF turns out to be useful in settings in which past states are observed eventually (at some lag). Here, when the measurement noise covariance is estimated by the empirical covariance, we show that the state predictions from SF are equivalent to those from a regression of past states on past measurements, subject to particular linear constraints (reflecting the relationships encoded in the measurement map). This allows us to port standard ideas (say, regularization methods) in regression over to dynamical systems.
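Since the result builds on the standard linear-Gaussian state-space setup, a minimal sketch of the KF recursion being transformed may help; the (F, H, Q, R) notation is the textbook convention rather than anything quoted from the paper. The paper's reformulation then folds the process model F into an augmented measurement map and takes the process noise to infinity (the "flat prior").

import numpy as np

def kf_step(x, P, y, F, H, Q, R):
    """One predict/update step of the standard Kalman filter for
    x_t = F x_{t-1} + process noise (cov Q), y_t = H x_t + meas. noise (cov R)."""
    # Predict: push the previous estimate through the process model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction using the new measurement y.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: constant-velocity model (position, velocity), observing position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, np.array([1.2]), F, H, Q, R)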
Human Digital Twins in Personalized Healthcare: An Overview and Future Perspectives
This evolution indicates an expansion from industrial uses into diverse fields, including healthcare [61], [59]. The core functionalities of digital twins include an accurate mirroring of their physical counterparts, capturing all associated processes in a data-driven manner, maintaining a continuous connection that synchronizes with the real-time state of their physical twins, and simulating physical behavior for predictive analysis [85]. In the context of healthcare, a novel extension of this technology manifests in the form of Human Digital Twins (HDTs), designed to provide a comprehensive digital mirror of individual patients. HDTs not only represent physical attributes but also integrate dynamic changes across molecular, physiological, and behavioral dimensions. This advancement is aligned with a shift toward personalized healthcare (PH) paradigms, enabling tailored treatment strategies based on a patient's unique health profile, thereby enhancing preventive, diagnostic, and therapeutic processes in clinical settings [44], [50]. The personalization aspect of HDTs underscores their potential to revolutionize healthcare by facilitating precise and individualized treatment plans that optimize patient outcomes [72]. Although the potential of digital twins in healthcare has garnered much attention, practical applications are still at an early, exploratory stage, as critical literature highlights [59]. Notably, institutions like the IEEE Computer Society and Gartner recognize this technology as a pivotal component in the ongoing evolution of healthcare systems that emphasize both precision and personalization [31], [89].
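The three core functionalities listed above (accurate mirroring, continuous synchronization, and predictive simulation) can be summarized in a deliberately simplified sketch; the class, fields, and methods below are conceptual placeholders, not an implementation from the surveyed literature.

from dataclasses import dataclass, field

@dataclass
class HumanDigitalTwin:
    """Conceptual sketch only: a twin that mirrors patient state,
    stays synchronized with incoming observations, and simulates ahead."""
    patient_id: str
    state: dict = field(default_factory=dict)  # molecular/physiological/behavioral variables

    def synchronize(self, observation: dict) -> None:
        # Continuous connection: fold each new real-world reading into the mirror.
        self.state.update(observation)

    def simulate(self, intervention: dict, horizon_days: int) -> dict:
        # Predictive analysis: placeholder for a personalized forward model.
        projected = dict(self.state)
        projected.update(intervention)
        projected["horizon_days"] = horizon_days
        return projected

twin = HumanDigitalTwin("patient-001")
twin.synchronize({"heart_rate": 72, "glucose_mg_dl": 95})
print(twin.simulate({"medication": "drug_A"}, horizon_days=30))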
V2X-ReaLO: An Open Online Framework and Dataset for Cooperative Perception in Reality
Xiang, Hao, Zheng, Zhaoliang, Xia, Xin, Zhao, Seth Z., Gao, Letian, Zhou, Zewei, Cai, Tianhui, Zhang, Yun, Ma, Jiaqi
Cooperative perception enabled by Vehicle-to-Everything (V2X) communication holds significant promise for enhancing the perception capabilities of autonomous vehicles, allowing them to overcome occlusions and extend their field of view. However, existing research predominantly relies on simulated environments or static datasets, leaving the feasibility and effectiveness of V2X cooperative perception, especially intermediate fusion, in real-world scenarios largely unexplored. In this work, we introduce V2X-ReaLO, an open online cooperative perception framework deployed on real vehicles and smart infrastructure that integrates early, late, and intermediate fusion methods within a unified pipeline and provides the first practical demonstration of online intermediate fusion's feasibility and performance under genuine real-world conditions. Additionally, we present an open benchmark dataset specifically designed to assess the performance of online cooperative perception systems. This new dataset extends the V2X-Real dataset to dynamic, synchronized ROS bags and provides 25,028 test frames with 6,850 annotated key frames in challenging urban scenarios. By enabling real-time assessment of perception accuracy and communication latency under dynamic conditions, V2X-ReaLO sets a new benchmark for advancing and optimizing cooperative perception systems in real-world applications. The code and datasets will be released to further advance the field.
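To clarify the three fusion levels the framework unifies, here is a schematic sketch; encode, fuse, detect, and nms_merge are hypothetical stand-ins for model components, not functions from the V2X-ReaLO codebase.

import numpy as np

def early_fusion(raw_points_ego, raw_points_infra, encode, detect):
    # Share raw sensor data, then run one perception model on the union.
    merged = np.concatenate([raw_points_ego, raw_points_infra], axis=0)
    return detect(encode(merged))

def intermediate_fusion(raw_points_ego, raw_points_infra, encode, fuse, detect):
    # Share compact intermediate features: the bandwidth/accuracy middle ground
    # whose online feasibility V2X-ReaLO demonstrates in practice.
    feats = fuse(encode(raw_points_ego), encode(raw_points_infra))
    return detect(feats)

def late_fusion(dets_ego, dets_infra, nms_merge):
    # Share final detections only and merge them (e.g., by non-maximum suppression).
    return nms_merge(dets_ego + dets_infra)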
Availability-aware Sensor Fusion via Unified Canonical Space for 4D Radar, LiDAR, and Camera
Paek, Dong-Hee, Kong, Seung-Hyun
Sensor fusion of camera, LiDAR, and 4-dimensional (4D) Radar has brought a significant performance improvement in autonomous driving (AD). However, fundamental challenges remain: deeply coupled fusion methods assume continuous sensor availability, making them vulnerable to sensor degradation and failure, whereas sensor-wise cross-attention fusion methods struggle with computational cost and unified feature representation. This paper presents availability-aware sensor fusion (ASF), a novel method that employs unified canonical projection (UCP) to enable consistency in all sensor features for fusion, and cross-attention across sensors along patches (CASAP) to enhance the robustness of sensor fusion against sensor degradation and failure. As a result, the proposed ASF shows superior object detection performance compared to existing state-of-the-art fusion methods under various weather and sensor degradation (or failure) conditions. Extensive experiments on the K-Radar dataset demonstrate that ASF achieves improvements of 9.7% in AP BEV (87.2%) and 20.1% in AP 3D (73.6%) in object detection at IoU=0.5, while requiring a low computational cost. The code will be available at https://github.com/kaist-avelab/K-Radar.
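The general idea of attending across sensors in a shared canonical space while masking out unavailable ones can be sketched as follows; this is a simplified illustration of the concept, not the paper's UCP or CASAP modules, and all shapes are assumptions.

import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def availability_masked_attention(query, sensor_tokens, available):
    """Cross-attention over per-sensor patch tokens, masking unavailable sensors.

    query:         (P, D)    fused patch queries in a shared canonical space
    sensor_tokens: (S, P, D) tokens per sensor, already projected to that space
    available:     (S,)      booleans; degraded/failed sensors are masked out
    """
    S, P, D = sensor_tokens.shape
    keys = sensor_tokens.reshape(S * P, D)       # flatten sensor-major
    scores = query @ keys.T / np.sqrt(D)         # (P, S*P)
    mask = np.repeat(available, P)               # per-sensor flag -> per-token flag
    scores[:, ~mask] = -1e9                      # exclude unavailable sensors
    return softmax(scores, axis=-1) @ keys       # (P, D) fused features

# Hypothetical shapes: 3 sensors (camera, LiDAR, 4D Radar), 16 patches, 32 dims.
rng = np.random.default_rng(0)
q = rng.normal(size=(16, 32))
tokens = rng.normal(size=(3, 16, 32))
fused = availability_masked_attention(q, tokens, np.array([True, True, False]))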
An End-to-End Learning-Based Multi-Sensor Fusion for Autonomous Vehicle Localization
Lin, Changhong, Lin, Jiarong, Sui, Zhiqiang, Qu, XiaoZhi, Wang, Rui, Sheng, Kehua, Zhang, Bo
Multi-sensor fusion is essential for autonomous vehicle localization, as it integrates data from various sources for enhanced accuracy and reliability. The accuracy of the integrated location and orientation depends on the precision of the uncertainty modeling. Traditional methods of uncertainty modeling typically assume a Gaussian distribution and involve manual heuristic parameter tuning. However, these methods struggle to scale effectively and to address long-tail scenarios. To address these challenges, we propose a learning-based method that encodes sensor information using higher-order neural network features, thereby eliminating the need for explicit uncertainty estimation. The method also removes the need for manual parameter fine-tuning by employing an end-to-end neural network designed specifically for multi-sensor fusion. In our experiments, we demonstrate the effectiveness of our approach in real-world autonomous driving scenarios. Results show that the proposed method outperforms existing multi-sensor fusion methods in terms of both accuracy and robustness. A video of the results can be viewed at https://youtu.be/q4iuobMbjME.
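A minimal sketch of the end-to-end idea, assuming simple vector measurements per sensor: each sensor gets a learned encoder in place of a hand-tuned Gaussian noise model, and a shared head regresses the pose directly. All dimensions, layer choices, and sensor inputs below are illustrative, not the paper's network.

import torch
import torch.nn as nn

class EndToEndFusion(nn.Module):
    """Sketch only: encode each sensor's measurement into learned features
    and regress the pose directly, with no explicit noise covariances."""
    def __init__(self, sensor_dims, feat_dim=64, pose_dim=6):
        super().__init__()
        # One small encoder per sensor replaces a per-sensor uncertainty model.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, feat_dim), nn.ReLU()) for d in sensor_dims]
        )
        self.head = nn.Sequential(
            nn.Linear(feat_dim * len(sensor_dims), feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, pose_dim),  # x, y, z, roll, pitch, yaw
        )

    def forward(self, measurements):
        feats = [enc(m) for enc, m in zip(self.encoders, measurements)]
        return self.head(torch.cat(feats, dim=-1))

# Hypothetical sensor inputs: GNSS (3), wheel odometry (2), IMU (6).
model = EndToEndFusion(sensor_dims=[3, 2, 6])
pose = model([torch.randn(1, 3), torch.randn(1, 2), torch.randn(1, 6)])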
Learning-Based Leader Localization for Underwater Vehicles With Optical-Acoustic-Pressure Sensor Fusion
Yang, Mingyang, Sha, Zeyu, Zhang, Feitian
Underwater vehicles have emerged as a critical technology for exploring and monitoring aquatic environments. Multi-vehicle systems have gained substantial interest due to their capability to perform collaborative tasks with improved efficiency. However, achieving precise localization of a leader underwater vehicle within a multi-vehicle configuration remains a significant challenge, particularly in dynamic and complex underwater conditions. To address this issue, this paper presents a novel tri-modal sensor fusion neural network approach that integrates optical, acoustic, and pressure sensors to localize the leader vehicle. The proposed method leverages the unique strengths of each sensor modality to improve localization accuracy and robustness. Specifically, optical sensors provide high-resolution imaging for precise relative positioning, acoustic sensors enable long-range detection and ranging, and pressure sensors offer environmental context awareness. The fusion of these sensor modalities is implemented using a deep learning architecture designed to extract and combine complementary features from raw sensor data. The effectiveness of the proposed method is validated through a custom-designed testing platform. Extensive data collection and experimental evaluations demonstrate that the tri-modal approach significantly improves the accuracy and robustness of leader localization, outperforming both single-modal and dual-modal methods.
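An illustrative sketch of how the three modalities' complementary signals might be encoded and fused; the architecture, input shapes, and feature sizes below are assumptions for demonstration, not the paper's network.

import torch
import torch.nn as nn

class TriModalLocalizer(nn.Module):
    """Illustrative sketch: fuse an optical image, acoustic range/bearing
    features, and a pressure reading to regress the leader's relative
    position (x, y, z)."""
    def __init__(self):
        super().__init__()
        # Optical: high-resolution imagery for precise relative positioning.
        self.optical = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
        )
        # Acoustic: long-range detection and ranging features.
        self.acoustic = nn.Sequential(nn.Linear(4, 32), nn.ReLU())
        # Pressure: depth/environmental context from a single scalar.
        self.pressure = nn.Sequential(nn.Linear(1, 8), nn.ReLU())
        self.head = nn.Linear(32 + 32 + 8, 3)

    def forward(self, image, acoustic_feat, pressure):
        z = torch.cat([self.optical(image), self.acoustic(acoustic_feat),
                       self.pressure(pressure)], dim=-1)
        return self.head(z)

model = TriModalLocalizer()
xyz = model(torch.randn(1, 3, 64, 64), torch.randn(1, 4), torch.randn(1, 1))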